Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables
Probabilistic graphical models (PGMs) provide a compact representation of
knowledge that can be queried in a flexible way: after learning the parameters
of a graphical model once, new probabilistic queries can be answered at test
time without retraining. However, when using undirected PGMs with hidden
variables, two sources of error typically compound in all but the simplest
models: (a) learning error (both computing the partition function and
integrating out the hidden variables is intractable); and (b) prediction error
(exact inference is also intractable). Here we introduce query training (QT), a
mechanism to learn a PGM that is optimized for the approximate inference
algorithm that will be paired with it. The resulting PGM is a worse model of
the data (as measured by the likelihood), but it is tuned to produce better
marginals for a given inference algorithm. Unlike prior works, our approach
preserves the querying flexibility of the original PGM: at test time, we can
estimate the marginal of any variable given any partial evidence. We
demonstrate experimentally that QT can be used to learn a challenging
8-connected grid Markov random field with hidden variables and that it
consistently outperforms the state-of-the-art AdVIL when tested on three
undirected models across multiple datasets.
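The idea behind query training can be illustrated with a toy sketch: fix an approximate inference procedure (here, a few unrolled mean-field iterations on a three-variable binary chain), sample random queries (hide a random subset of variables), and tune the model's couplings to minimize the cross-entropy of the predicted marginals at the hidden variables. This is not the paper's algorithm or model; the chain, the mean-field updates, and the finite-difference gradient (standing in for backpropagation through the unrolled inference) are all illustrative.

```python
import math
import random

random.seed(0)

# Toy data: three correlated binary variables forming a chain x0 - x1 - x2.
def sample_data(n=400):
    data = []
    for _ in range(n):
        x0 = random.randint(0, 1)
        x1 = x0 if random.random() > 0.1 else 1 - x0
        x2 = x1 if random.random() > 0.1 else 1 - x1
        data.append((x0, x1, x2))
    return data

def mean_field(theta, evidence, n_iter=5):
    """Unrolled mean-field inference on the chain.
    theta = [w01, w12] are agreement couplings; evidence[i] is 0/1 or None
    (None = query variable).  Returns q[i], the estimate of P(x_i = 1)."""
    q = [float(e) if e is not None else 0.5 for e in evidence]
    for _ in range(n_iter):
        for i in range(3):
            if evidence[i] is not None:
                continue
            field = 0.0
            if i > 0:
                field += theta[i - 1] * (2 * q[i - 1] - 1)
            if i < 2:
                field += theta[i] * (2 * q[i + 1] - 1)
            q[i] = 1.0 / (1.0 + math.exp(-field))
    return q

def make_queries(data, n=300):
    """Random (sample, evidence) pairs: hide a random nonempty subset."""
    queries = []
    for _ in range(n):
        x = random.choice(data)
        hidden = [random.random() < 0.5 for _ in range(3)]
        if not any(hidden):
            hidden[random.randrange(3)] = True
        ev = [None if hidden[i] else x[i] for i in range(3)]
        queries.append((x, ev))
    return queries

def query_loss(theta, queries):
    """Average cross-entropy of the predicted marginals at hidden variables."""
    loss = 0.0
    for x, ev in queries:
        q = mean_field(theta, ev)
        for i in range(3):
            if ev[i] is None:
                p = min(max(q[i], 1e-6), 1 - 1e-6)
                loss -= x[i] * math.log(p) + (1 - x[i]) * math.log(1 - p)
    return loss / len(queries)

data = sample_data()
queries = make_queries(data)
theta = [0.0, 0.0]
eps = 1e-4
for _ in range(40):
    # Finite-difference gradient descent on the query loss (a stand-in for
    # backpropagating through the unrolled inference iterations).
    for j in range(2):
        up, dn = list(theta), list(theta)
        up[j] += eps
        dn[j] -= eps
        theta[j] -= (query_loss(up, queries) - query_loss(dn, queries)) / (2 * eps)
print(theta)  # couplings move toward favouring neighbour agreement
```

The parameters that minimize this query loss need not maximize the data likelihood; they are whatever makes five mean-field iterations answer random queries well, which is the trade-off the abstract describes.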
3D Neural Embedding Likelihood for Robust Probabilistic Inverse Graphics
The ability to perceive and understand 3D scenes is crucial for many
applications in computer vision and robotics. Inverse graphics is an appealing
approach to 3D scene understanding that aims to infer the 3D scene structure
from 2D images. In this paper, we introduce probabilistic modeling to the
inverse graphics framework to quantify uncertainty and achieve robustness in 6D
pose estimation tasks. Specifically, we propose 3D Neural Embedding Likelihood
(3DNEL) as a unified probabilistic model over RGB-D images, and develop
efficient inference procedures on 3D scene descriptions. 3DNEL effectively
combines learned neural embeddings from RGB with depth information to improve
robustness in sim-to-real 6D object pose estimation from RGB-D images.
Performance on the YCB-Video dataset is on par with state-of-the-art yet is
much more robust in challenging regimes. In contrast to discriminative
approaches, 3DNEL's probabilistic generative formulation jointly models
multi-object scenes, quantifies uncertainty in a principled way, and handles
object pose tracking under heavy occlusion. Finally, 3DNEL provides a
principled framework for incorporating prior knowledge about the scene and
objects, which allows natural extension to additional tasks like camera pose
tracking from video.
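The flavour of a unified likelihood over RGB-D observations can be sketched as a per-pixel mixture of an inlier term (embedding similarity times Gaussian depth agreement) and a uniform outlier term, summed over pixels for a candidate scene rendering. The function names, the cosine-similarity embedding term, and all constants below are illustrative assumptions, not the 3DNEL formulation itself.

```python
import math

def pixel_log_likelihood(obs_embed, obs_depth, rend_embed, rend_depth,
                         sigma_depth=0.01, p_outlier=0.05, outlier_density=1e-3):
    """Toy per-pixel likelihood: inlier term combines an RGB-embedding
    similarity with a Gaussian on the depth residual; a uniform outlier
    term gives robustness to unmodelled pixels."""
    # Cosine similarity of observed vs. rendered embeddings, mapped to (0, 1).
    dot = sum(a * b for a, b in zip(obs_embed, rend_embed))
    na = math.sqrt(sum(a * a for a in obs_embed))
    nb = math.sqrt(sum(b * b for b in rend_embed))
    sim = (dot / (na * nb) + 1.0) / 2.0
    # Gaussian density on the depth residual.
    depth_term = math.exp(-0.5 * ((obs_depth - rend_depth) / sigma_depth) ** 2) \
        / (sigma_depth * math.sqrt(2 * math.pi))
    inlier = sim * depth_term
    return math.log(p_outlier * outlier_density + (1 - p_outlier) * inlier)

def scene_log_likelihood(observed, rendered):
    """Sum per-pixel log-likelihoods; `rendered` comes from a candidate
    3D scene description (e.g. a 6D pose hypothesis)."""
    return sum(pixel_log_likelihood(oe, od, me, md)
               for (oe, od), (me, md) in zip(observed, rendered))
```

Scoring pose hypotheses by rendering them and evaluating such a likelihood is what lets a generative formulation compare multi-object scene explanations and express uncertainty, rather than committing to a single discriminative prediction.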
DURableVS: Data-efficient Unsupervised Recalibrating Visual Servoing via online learning in a structured generative model
Visual servoing enables robotic systems to perform accurate closed-loop control, which is required in many applications. However, existing methods either require precise calibration of the robot kinematic model and cameras or use neural architectures that require large amounts of data to train. In this work, we present a method for unsupervised learning of visual servoing that does not require any prior calibration and is extremely data-efficient. Our key insight is that visual servoing does not depend on identifying the veridical kinematic and camera parameters, but instead only on an accurate generative model of image feature observations from the joint positions of the robot. We demonstrate that with our model architecture and learning algorithm, we can consistently learn accurate models from less than 50 training samples (which amounts to less than 1 min of unsupervised data collection), and that such data-efficient learning is not possible with standard neural architectures. Further, we show that by using the generative model in the loop and learning online, we can enable a robotic system to recover from calibration errors and to detect and quickly adapt to unexpected changes in the robot-camera system (e.g. a bumped camera or new objects).
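The core claim, that servoing needs only a predictive joints-to-features model rather than the true calibration, can be sketched with a deliberately simple stand-in: a 2-link planar arm observed through an unknown camera offset, fit from 20 samples by online gradient steps, then re-adapted after the camera is "bumped". The arm model, parameter names, and learning rates are illustrative assumptions, not the paper's structured generative model or learning algorithm.

```python
import math
import random

random.seed(0)

# "True" system: a 2-link planar arm viewed through an unknown planar
# camera offset (cx, cy).  All numbers are illustrative.
TRUE = dict(l1=0.8, l2=0.5, cx=0.3, cy=-0.2)

def observe(q1, q2, p):
    """Image-feature position of the end effector under parameters p."""
    x = p["l1"] * math.cos(q1) + p["l2"] * math.cos(q1 + q2) + p["cx"]
    y = p["l1"] * math.sin(q1) + p["l2"] * math.sin(q1 + q2) + p["cy"]
    return x, y

def sgd_step(model, q1, q2, obs, lr=0.1):
    """One online gradient step on the squared prediction error.
    The model is linear in its parameters, so this is well behaved."""
    px, py = observe(q1, q2, model)
    rx, ry = px - obs[0], py - obs[1]
    model["l1"] -= lr * 2 * (rx * math.cos(q1) + ry * math.sin(q1))
    model["l2"] -= lr * 2 * (rx * math.cos(q1 + q2) + ry * math.sin(q1 + q2))
    model["cx"] -= lr * 2 * rx
    model["cy"] -= lr * 2 * ry

def avg_error(model, samples):
    errs = []
    for q1, q2, obs in samples:
        px, py = observe(q1, q2, model)
        errs.append(math.hypot(px - obs[0], py - obs[1]))
    return sum(errs) / len(errs)

# Data-efficient fit: 20 unsupervised samples, repeated SGD passes.
samples = []
for _ in range(20):
    q1 = random.uniform(-math.pi, math.pi)
    q2 = random.uniform(-math.pi, math.pi)
    samples.append((q1, q2, observe(q1, q2, TRUE)))

model = dict(l1=1.0, l2=1.0, cx=0.0, cy=0.0)
for _ in range(200):
    for q1, q2, obs in samples:
        sgd_step(model, q1, q2, obs)

# Simulate a bumped camera, then recover with purely online updates.
TRUE["cx"] += 0.4
for _ in range(50):
    q1 = random.uniform(-math.pi, math.pi)
    q2 = random.uniform(-math.pi, math.pi)
    sgd_step(model, q1, q2, observe(q1, q2, TRUE))
```

Note that the fitted parameters need not match the physical ones; as in the abstract, only the accuracy of the predicted feature observations matters, and keeping the model in the loop is what makes detection of and recovery from the bump automatic.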